
Would my dog or cat really eat me if I died alone?

Popular Science

As grim as it sounds, it's often expected--and biology explains why. Is man's best friend also a dead man's best friend? Case studies say maybe not.


NICE: Neural Implicit Craniofacial Model for Orthognathic Surgery Prediction

Yang, Jiawen, Cao, Yihui, Tian, Xuanyu, Zhang, Yuyao, Wei, Hongjiang

arXiv.org Artificial Intelligence

Orthognathic surgery is a crucial intervention for correcting dentofacial skeletal deformities to enhance occlusal functionality and facial aesthetics. Accurate postoperative facial appearance prediction remains challenging due to the complex nonlinear interactions between skeletal movements and facial soft tissue. Existing biomechanical models, parametric models, and deep-learning approaches either lack computational efficiency or fail to fully capture these intricate interactions. To address these limitations, we propose the Neural Implicit Craniofacial Model (NICE), which employs implicit neural representations for accurate anatomical reconstruction and surgical outcome prediction. NICE comprises a shape module, which employs region-specific implicit Signed Distance Function (SDF) decoders to reconstruct the facial surface, maxilla, and mandible, and a surgery module, which employs region-specific deformation decoders. These deformation decoders are driven by a shared surgical latent code to effectively model the complex, nonlinear biomechanical response of the facial surface to skeletal movements, incorporating anatomical prior knowledge. The deformation decoders output point-wise displacement fields, enabling precise modeling of surgical outcomes. Extensive experiments demonstrate that NICE outperforms current state-of-the-art methods, notably improving prediction accuracy in critical facial regions such as the lips and chin, while robustly preserving anatomical integrity. This work provides a clinically viable tool for enhanced surgical planning and patient consultation in orthognathic procedures.
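The abstract does not give the network details, but the overall structure (per-region SDF decoder, plus a per-region deformation decoder conditioned on a shared surgical latent code that outputs point-wise displacements) can be sketched roughly as follows. All sizes, initialisations, and the tiny stand-in MLPs are assumptions for illustration only, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny 2-layer stand-in MLP: ReLU(x W1 + b1) W2 + b2."""
    W1, b1, W2, b2 = params
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def init(d_in, d_hid, d_out):
    return (rng.normal(0, 0.1, (d_in, d_hid)), np.zeros(d_hid),
            rng.normal(0, 0.1, (d_hid, d_out)), np.zeros(d_out))

D_SHAPE, D_SURG, HID = 8, 4, 32          # assumed latent/hidden sizes
regions = ["face", "maxilla", "mandible"]
# One SDF decoder and one deformation decoder per anatomical region.
sdf_dec = {r: init(3 + D_SHAPE, HID, 1) for r in regions}
def_dec = {r: init(3 + D_SURG, HID, 3) for r in regions}

def predict(points, shape_code, surg_code, region):
    """SDF values and displaced (post-op) positions for query points."""
    n = len(points)
    sdf_in = np.hstack([points, np.tile(shape_code, (n, 1))])
    def_in = np.hstack([points, np.tile(surg_code, (n, 1))])
    sdf  = mlp(sdf_dec[region], sdf_in)   # (n, 1) signed distances
    disp = mlp(def_dec[region], def_in)   # (n, 3) point-wise displacements
    return sdf, points + disp

pts = rng.normal(0, 1, (5, 3))
shape_code = rng.normal(0, 1, D_SHAPE)
surg_code  = rng.normal(0, 1, D_SURG)    # shared across all regions
sdf, moved = predict(pts, shape_code, surg_code, "face")
```

The key structural point is that the same `surg_code` conditions every region's deformation decoder, coupling skeletal movements to the facial surface response.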


Virtual staining for 3D X-ray histology of bone implants

Irvine, Sarah C., Lucas, Christian, Krüger, Diana, Guedert, Bianca, Moosmann, Julian, Zeller-Plumhoff, Berit

arXiv.org Artificial Intelligence

Three-dimensional X-ray histology techniques offer a non-invasive alternative to conventional 2D histology, enabling volumetric imaging of biological tissues without the need for physical sectioning or chemical staining. However, the inherent greyscale image contrast of X-ray tomography limits its biochemical specificity compared to traditional histological stains. Within digital pathology, deep learning-based virtual staining has demonstrated utility in simulating stained appearances from label-free optical images. In this study, we extend virtual staining to the X-ray domain by applying cross-modality image translation to generate artificially stained slices from synchrotron-radiation-based micro-CT scans. Using over 50 co-registered image pairs of micro-CT and toluidine blue-stained histology from bone-implant samples, we trained a modified CycleGAN network tailored for limited paired data. Whole slide histology images were downsampled to match the voxel size of the CT data, with on-the-fly data augmentation for patch-based training. The model incorporates pixelwise supervision and greyscale consistency terms, producing histologically realistic colour outputs while preserving high-resolution structural detail. Our method outperformed Pix2Pix and standard CycleGAN baselines across SSIM, PSNR, and LPIPS metrics. Once trained, the model can be applied to full CT volumes to generate virtually stained 3D datasets, enhancing interpretability without additional sample preparation. While features such as new bone formation were able to be reproduced, some variability in the depiction of implant degradation layers highlights the need for further training data and refinement. This work introduces virtual staining to 3D X-ray imaging and offers a scalable route for chemically informative, label-free tissue characterisation in biomedical research.
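As a rough illustration of the two extra terms mentioned above (pixelwise supervision and greyscale consistency) on top of a CycleGAN objective: the weights, the luma projection, and the function names below are all stand-ins, since the abstract does not specify them.

```python
import numpy as np

def to_grey(rgb):
    """Luma-style greyscale projection of an RGB image in [0, 1]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def l1(a, b):
    return float(np.mean(np.abs(a - b)))

def staining_losses(ct_grey, fake_rgb, real_rgb, w_pix=10.0, w_grey=5.0):
    """Extra terms added to the adversarial/cycle losses (assumed weights):
    pixelwise L1 needs co-registered pairs; the greyscale consistency term
    ties the virtual stain back to the CT contrast, preserving structure."""
    pix  = l1(fake_rgb, real_rgb)          # supervised L1 on paired patches
    grey = l1(to_grey(fake_rgb), ct_grey)  # keep high-res CT detail
    return w_pix * pix + w_grey * grey

rng = np.random.default_rng(1)
ct = rng.random((16, 16))
# A "perfect" output whose colour channels all equal the CT slice
# incurs (near-)zero loss under both terms.
fake = np.stack([ct, ct, ct], axis=-1)
loss = staining_losses(ct, fake, fake)
```

In training these terms would be summed with the usual CycleGAN generator and discriminator losses; they are shown in isolation here.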


PARASIDE: An Automatic Paranasal Sinus Segmentation and Structure Analysis Tool for MRI

Möller, Hendrik, Krautschick, Lukas, Atad, Matan, Graf, Robert, Busch, Chia-Jung, Beule, Achim, Scharf, Christian, Kaderali, Lars, Menze, Bjoern, Rueckert, Daniel, Kirschke, Jan, Schwitzing, Fabian

arXiv.org Artificial Intelligence

Chronic rhinosinusitis (CRS) is a common and persistent sinus inflammation that affects 5-12% of the general population. It significantly impacts quality of life and is often difficult to assess due to its subjective nature in clinical evaluation. We introduce PARASIDE, an automatic tool for segmenting air and soft tissue volumes of the structures of the sinus maxillaris, frontalis, sphenoidalis, and ethmoidalis in T1-weighted MRI. Using this segmentation, we can quantify feature relations that could previously be assessed only manually and subjectively. We performed an exemplary study and showed both volume and intensity relations between structures and radiology reports. While the soft tissue segmentation is good, the automated annotations of the air volumes are excellent. The average intensities over air structures are consistently below those of the soft tissues, with close to perfect separability. Healthy subjects exhibit lower soft tissue volumes and lower intensities. Our developed system is the first automated whole-nasal segmentation covering 16 structures, and is capable of calculating medically relevant features such as the Lund-Mackay score.
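A Lund-Mackay score can be derived from such a segmentation by grading each sinus by its degree of opacification (0 clear, 1 partial, 2 complete; the ostiomeatal complex is graded 0 or 2 only). The sketch below is an assumed implementation; the threshold for "complete" opacification and the data layout are illustrative, not PARASIDE's actual code.

```python
def lm_grade(opacified_fraction, full_threshold=0.95):
    """Lund-Mackay grade for one sinus: 0 clear, 1 partial, 2 complete.
    full_threshold is an assumed cut-off for 'complete' opacification."""
    if opacified_fraction <= 0.0:
        return 0
    return 2 if opacified_fraction >= full_threshold else 1

def lund_mackay(soft_tissue_vol, total_vol):
    """Score from segmented soft-tissue vs. total sinus volumes.
    Keys are (side, sinus) pairs, e.g. ("left", "maxillary")."""
    score = 0
    for key, soft in soft_tissue_vol.items():
        frac = soft / total_vol[key]
        side, sinus = key
        if sinus == "ostiomeatal":
            score += 2 if frac > 0.0 else 0   # graded 0 or 2 only
        else:
            score += lm_grade(frac)
    return score

soft  = {("left", "maxillary"): 10.0, ("left", "frontal"): 1.0,
         ("left", "ostiomeatal"): 0.0}
total = {("left", "maxillary"): 10.0, ("left", "frontal"): 5.0,
         ("left", "ostiomeatal"): 1.0}
score = lund_mackay(soft, total)  # 2 (complete) + 1 (partial) + 0 = 3
```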


Image-to-Force Estimation for Soft Tissue Interaction in Robotic-Assisted Surgery Using Structured Light

Wang, Jiayin, Yao, Mingfeng, Wei, Yanran, Guo, Xiaoyu, Zheng, Ayong, Zhao, Weidong

arXiv.org Artificial Intelligence

For Minimally Invasive Surgical (MIS) robots, accurate haptic interaction force feedback is essential for ensuring the safety of interacting with soft tissue. However, most existing MIS robotic systems cannot facilitate direct measurement of the interaction force with hardware sensors due to space limitations. This letter introduces an effective vision-based scheme that utilizes a one-shot structured light projection with a designed pattern on soft tissue, coupled with haptic information processing through a trained image-to-force neural network. The images captured from the endoscopic stereo camera are analyzed to reconstruct high-resolution 3D point clouds of the soft tissue deformation. Based on this, a modified PointNet-based force estimation method is proposed, which excels in representing the complex mechanical properties of soft tissue. Force interaction experiments are conducted on three silicone materials of different stiffness. The results validate the effectiveness of the proposed scheme.
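The defining property of a PointNet-style regressor, which the abstract relies on, is a shared per-point MLP followed by a symmetric (order-invariant) pooling, so the predicted force does not depend on how the reconstructed cloud happens to be ordered. A minimal stand-in sketch (random untrained weights, not the paper's modified network):

```python
import numpy as np

rng = np.random.default_rng(2)

def pointnet_force(points, W1, b1, W2, b2, w_out, b_out):
    """PointNet-style regressor: shared per-point MLP, max-pool over the
    deformed-tissue point cloud, then a linear head -> scalar force."""
    h = np.maximum(points @ W1 + b1, 0.0)   # same MLP applied to each point
    h = np.maximum(h @ W2 + b2, 0.0)
    g = h.max(axis=0)                       # symmetric global feature
    return float(g @ w_out + b_out)

HID = 64                                     # assumed width
W1, b1 = rng.normal(0, 0.1, (3, HID)), np.zeros(HID)
W2, b2 = rng.normal(0, 0.1, (HID, HID)), np.zeros(HID)
w_out, b_out = rng.normal(0, 0.1, HID), 0.0

cloud = rng.normal(0, 1, (2048, 3))          # stand-in deformation cloud
f1 = pointnet_force(cloud, W1, b1, W2, b2, w_out, b_out)
f2 = pointnet_force(cloud[::-1], W1, b1, W2, b2, w_out, b_out)  # permuted
```

Because the max-pool is order-invariant, `f1` and `f2` are identical, which is what makes this architecture a natural fit for unordered 3D reconstructions.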


Low-Contact Grasping of Soft Tissue with Complex Geometry using a Vortex Gripper

Mykhailyshyn, Roman, Fey, Ann Majewicz

arXiv.org Artificial Intelligence

Soft tissue manipulation is an integral aspect of most surgical procedures; however, the vast majority of surgical graspers used today are made of hard materials, such as metals or hard plastics. Furthermore, these graspers predominately function by pinching tissue between two hard objects as a method for tissue manipulation. As such, the potential to apply too much force during contact, and thus damage tissue, is inherently high. As an alternative approach, graspers based on a pneumatic vortex could potentially levitate soft tissue, enabling manipulation with low or even no contact force. In this paper, we present the design, as well as a full factorial study of the force characteristics, of a vortex gripper grasping soft surfaces of four common shapes, with convex and concave curvature, ranging over 10 different radii of curvature, for a total of 40 unique surfaces. By changing the parameters of the nozzle elements in the design of the gripper, it was possible to investigate the influence of the mass flow parameters of the vortex gripper on the lifting force for all of these different soft surfaces. An ex vivo experiment was conducted on grasping biological tissues and soft balls of various shapes to show the advantages and disadvantages of the proposed technology. The results allowed us to identify limitations of vortex technology and the next stages of its improvement for medical use.


Generating Freeform Endoskeletal Robots

Li, Muhan, Kong, Lingji, Kriegman, Sam

arXiv.org Artificial Intelligence

The automatic design of embodied agents (e.g. robots) has existed for 31 years and is experiencing a renaissance of interest in the literature. To date, however, the field has remained narrowly focused on two kinds of anatomically simple robots: (1) fully rigid, jointed bodies; and (2) fully soft, jointless bodies. Here we bridge these two extremes with the open-ended creation of terrestrial endoskeletal robots: deformable soft bodies that leverage jointed internal skeletons to move efficiently across land. Simultaneous de novo generation of external and internal structures is achieved by (i) modeling 3D endoskeletal body plans as integrated collections of elastic and rigid cells that directly attach to form soft tissues anchored to compound rigid bodies; (ii) encoding these discrete mechanical subsystems into a continuous yet coherent latent embedding; (iii) optimizing the sensorimotor coordination of each decoded design using model-free reinforcement learning; and (iv) navigating this smooth yet highly non-convex latent manifold using evolutionary strategies. This yields an endless stream of novel species of "higher robots" that, like all higher animals, harness the mechanical advantages of both elastic tissues and skeletal levers for terrestrial travel. It also provides a plug-and-play experimental platform for benchmarking evolutionary design and representation learning algorithms in complex hierarchical embodied systems.
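Step (iv), navigating the latent design manifold with evolutionary strategies, can be sketched in miniature as follows. The fitness function here is a toy stand-in (a real one would decode the latent into a robot and measure, say, distance travelled after RL training); the population sizes and decay schedule are assumptions, not the paper's settings.

```python
import numpy as np

def evolve_latent(fitness, dim, pop=64, elite=8,
                  sigma=0.3, decay=0.95, gens=60, seed=0):
    """Simple (mu, lambda)-style evolutionary strategy over a continuous
    latent design space: sample around the mean, keep the elites,
    recentre on their average, and anneal the exploration noise."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    for _ in range(gens):
        cand = mean + sigma * rng.normal(size=(pop, dim))
        scores = np.array([fitness(z) for z in cand])
        mean = cand[np.argsort(scores)[-elite:]].mean(axis=0)
        sigma *= decay
    return mean

# Toy concave fitness with its optimum at the all-ones latent code.
target = np.ones(6)
best = evolve_latent(lambda z: -np.sum((z - target) ** 2), dim=6)
```

The appeal of this scheme in the paper's setting is that it needs only fitness evaluations of decoded designs, never gradients through the decoder or the physics.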


Personalised 3D Human Digital Twin with Soft-Body Feet for Walking Simulation

Loke, Kum Yew, Chan, Sherwin Stephen, Lei, Mingyuan, Johan, Henry, Zuo, Bingran, Ang, Wei Tech

arXiv.org Artificial Intelligence

With the increasing use of assistive robots in rehabilitation and assisted mobility of human patients, there is a need for a deeper understanding of human-robot interactions, particularly through simulation, which allows these interactions to be studied in a digital environment. There is an emphasis on accurately modelling personalised 3D human digital twins in these simulations to glean more insights into human-robot interactions. In this paper, we propose integrating personalised soft-body feet, generated using the motion capture data of real human subjects, into a skeletal model and training it with a walking control policy. In evaluations using ground reaction force and joint angle results, the soft-body feet generated ground reaction forces comparable to real measured data and closely followed the joint angles of the bare skeletal model and the reference motion. This presents an interesting avenue for producing a dynamically accurate human model in simulation, driven by its own control policy while seeing only kinematic information during training.


Minimally Invasive Flexible Needle Manipulation Based on Finite Element Simulation and Cross Entropy Method

Wang, Yanzhou, Chang, Chang, Mei, Junling, Leonard, Simon, Iordachita, Iulian

arXiv.org Artificial Intelligence

Percutaneous needle interventions capture a broad class of minimally invasive diagnosis and treatment procedures, such as biopsy [1]-[3], brachytherapy [4], [5], and spinal injection [6]-[8]. Depending on the clinical procedure, a range of needles with different gauges, stiffness levels, and tip geometries is available. These inherent needle characteristics play a crucial role in determining how the needle moves through soft biological tissues; additionally, surgeons also employ various techniques, such as rotating or bending the needle, to adjust the position of the needle tip in situ during insertion. Since the needle will be discretized into discrete elements, the complete state of the needle, and the simulation environment in general, could involve hundreds of variables, and planning a minimally invasive insertion and closed-loop control of the flexible needle becomes a challenging problem. Previous works in this domain focus primarily on resolved-rate control, which relies on inverting a numerical input-output Jacobian matrix obtained either via Broyden's update law or by simulating small input disturbances [10], [13], [15]-[18]. Yet obtaining such an invertible mapping can be challenging.
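The cross-entropy method named in the title sidesteps that inversion: instead of a Jacobian, it samples candidate control sequences, scores them with the finite element simulation, and refits a sampling distribution to the best candidates. A minimal sketch with a stand-in cost function (a real cost would query the FE needle model; the horizon, population, and target below are illustrative assumptions):

```python
import numpy as np

def cem_plan(cost, horizon, iters=30, pop=100, elite=10, seed=0):
    """Cross-entropy method: sample control sequences from a Gaussian,
    score each with the simulator, refit mean/std to the elite set."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        u = mean + std * rng.normal(size=(pop, horizon))
        costs = np.array([cost(seq) for seq in u])
        elites = u[np.argsort(costs)[:elite]]      # lowest-cost sequences
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Stand-in cost: squared distance of a toy "tip depth" (sum of inputs)
# from a target depth of 3.0; a real cost would run the FE simulation.
plan = cem_plan(lambda u: (u.sum() - 3.0) ** 2, horizon=5)
```

Because CEM only ever evaluates the simulator forward, it never needs the invertible input-output mapping that resolved-rate control depends on.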


RadBARTsum: Domain Specific Adaption of Denoising Sequence-to-Sequence Models for Abstractive Radiology Report Summarization

Wu, Jinge, Hasan, Abul, Wu, Honghan

arXiv.org Artificial Intelligence

Radiology report summarization is a crucial task that can help doctors quickly identify clinically significant findings without the need to review detailed sections of reports. This study proposes RadBARTsum, a domain-specific and ontology-facilitated adaptation of the BART model for abstractive radiology report summarization. The approach involves two main steps: 1) re-training the BART model on a large corpus of radiology reports using a novel entity masking strategy to improve biomedical domain knowledge learning, and 2) fine-tuning the model for the summarization task using the Findings and Background sections to predict the Impression section. Experiments are conducted using different masking strategies. Results show that the re-training process with domain-knowledge-facilitated masking consistently improves performance across various settings. This work contributes a domain-specific generative language model for radiology report summarization and a method for utilising medical knowledge to realise entity-masked language modelling. The proposed approach demonstrates a promising direction for enhancing the efficiency of language models by deepening their understanding of clinical knowledge in radiology reports.
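The core of an entity masking strategy like the one described is simple: instead of masking random spans, spans matching known clinical entities (e.g. from a medical ontology) are replaced with the mask token, forcing pre-training to recover domain terms from context. The helper below is a hypothetical sketch, not the paper's implementation; the entity list and mask token are assumptions.

```python
import re

MASK = "<mask>"  # BART's mask token

def entity_mask(report, entities):
    """Replace known clinical entities with the mask token so the
    denoising objective concentrates on domain vocabulary."""
    # Longest-first so multi-word entities win over their substrings.
    for ent in sorted(entities, key=len, reverse=True):
        report = re.sub(rf"\b{re.escape(ent)}\b", MASK, report,
                        flags=re.IGNORECASE)
    return report

masked = entity_mask(
    "Findings: mild cardiomegaly with small pleural effusion.",
    {"cardiomegaly", "pleural effusion"},
)
# masked == "Findings: mild <mask> with small <mask>."
```

During re-training the model would then be asked to reconstruct the original report from this masked input, exactly as in BART's usual denoising setup but with masks placed on entities rather than arbitrary spans.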